53 research outputs found

    Fine-tuning the fuzziness of strong fuzzy partitions through PSO

    We study the influence of the fuzziness of trapezoidal fuzzy sets in the strong fuzzy partitions (SFPs) that constitute the database of a fuzzy rule-based classifier. To this end, we develop a particular representation of trapezoidal fuzzy sets based on the concept of cuts, which are the cross-points of fuzzy sets in an SFP and fix the position of the fuzzy sets in the universe of discourse. In this way, it is possible to isolate the parameters that characterize the fuzziness of the fuzzy sets, which are subject to fine-tuning through particle swarm optimization (PSO). In this paper, we propose a formulation of the parameter space that enables the exploration of all possible levels of fuzziness in an SFP. The experimental results show that the impact of fuzziness is strongly dependent on the defuzzification procedure used in fuzzy rule-based classifiers: fuzziness has little influence in the case of winner-takes-all defuzzification, while it is more influential in weighted-sum defuzzification, which, however, may pose some interpretation problems.
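    The cut-based representation can be illustrated with a short sketch (illustrative only; the function and parameter names are ours, not the paper's, and a single shared half-width `s` stands in for the per-cut fuzziness parameters). The cut points anchor the fuzzy sets in the universe of discourse, while `s` alone controls the fuzziness, and the memberships sum to one at every point, as required of a strong fuzzy partition:

    ```python
    def clip01(v):
        """Clamp a value to the unit interval [0, 1]."""
        return max(0.0, min(1.0, v))

    def sfp_memberships(x, cuts, s):
        """Memberships of x to the trapezoidal fuzzy sets of a strong fuzzy
        partition with sorted cut points `cuts` and fuzziness half-width s
        (assumes 0 < 2*s < distance between consecutive cuts).
        By construction, the memberships sum to 1 at every x."""
        n_sets = len(cuts) + 1
        mus = []
        for j in range(n_sets):
            mu = 1.0
            if j > 0:            # rising edge around the left cut
                mu = min(mu, clip01((x - (cuts[j - 1] - s)) / (2 * s)))
            if j < len(cuts):    # falling edge around the right cut
                mu = min(mu, clip01(((cuts[j] + s) - x) / (2 * s)))
            mus.append(mu)
        return mus
    ```

    Tuning `s` changes only the slopes around the cuts, leaving the cuts themselves fixed, which is exactly the degree of freedom that PSO explores in this formulation.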

    An incremental algorithm for granular counting with possibility theory

    Data counting is non-trivial when data are uncertain. In the case of uncertainty due to incompleteness, possibility theory can be used to define a granular counting model. Two algorithms have been proposed in the literature to compute granular counting: exact granular counting, with quadratic time complexity, and approximate granular counting, with linear time complexity. However, both algorithms require that all data are available before counting. This paper presents an incremental granular counting algorithm that provides an efficient and exact computation of the granular count without requiring all data to be available in advance, thus opening the door to applications involving data streams.
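    To convey the incremental idea, here is a minimal sketch of how a possibilistic count can be updated one observation at a time under a min–max combination rule; the function name and the exact recurrence are our illustration, not necessarily the paper's algorithm. The count is kept as a possibility distribution over 0..m and grows by one slot per observation:

    ```python
    def update_count(dist, p_match, p_nonmatch):
        """Update the possibility distribution of the count when a new
        observation arrives. p_match / p_nonmatch are the possibilities
        that the observation does / does not refer to the counted referent.
        Count n is reached either by matching from n-1 or by skipping at n;
        possibilities combine by min, alternatives by max."""
        new = [0.0] * (len(dist) + 1)
        for n in range(len(new)):
            via_match = min(dist[n - 1], p_match) if n >= 1 else 0.0
            via_skip = min(dist[n], p_nonmatch) if n < len(dist) else 0.0
            new[n] = max(via_match, via_skip)
        return new
    ```

    Starting from the certain empty count `[1.0]`, each update costs time linear in the current distribution length, so no observation needs to be stored after it has been processed.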

    Interpretability of Fuzzy Systems


    Chapters 10, 12


    Descriptive Stability of Fuzzy Rule-Based Systems

    Fuzzy Rule-Based Systems (FRBSs) are endowed with a knowledge base that can be used to provide model and outcome explanations. Usually, FRBSs are acquired from data by applying some learning method: it is expected that, when modeling the same phenomenon, the FRBSs resulting from the application of a learning method should provide almost the same explanations. This requires a stability in the description of the knowledge bases, which can be evaluated through the proposed measure of Descriptive Stability. The measure has been applied to three methods for generating FRBSs on three benchmark datasets. The results show that, under the same settings, different methods may produce FRBSs with varying stability, which affects their ability to provide trustworthy explanations.
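    As a rough illustration of the underlying idea (this is a simplified proxy, not the paper's definition of Descriptive Stability), one can compare the rule bases produced across repeated runs of a learning method by averaging the pairwise Jaccard similarity of their linguistic rules:

    ```python
    from itertools import combinations

    def jaccard(a, b):
        """Jaccard similarity between two sets of linguistic rules."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def stability_proxy(rule_bases):
        """Average pairwise Jaccard similarity between rule bases, each
        given as a collection of hashable rule descriptions. 1.0 means
        every run produced exactly the same rules; 0.0 means no overlap."""
        pairs = list(combinations(rule_bases, 2))
        if not pairs:
            return 1.0
        return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
    ```

    A method whose runs keep producing the same rules scores near 1, matching the intuition that stable descriptions are a precondition for trustworthy explanations.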

    A Bayesian Interpretation of Fuzzy C-Means

    In Explainable Artificial Intelligence, the interpretation of the decisions provided by a model is of primary importance. In this context, we consider Fuzzy C-Means (FCM), a clustering algorithm that induces a model from data by assigning, to each data point, a degree of membership to each cluster such that the sum of memberships is one. A fuzzification parameter is also used to tune the degree of fuzziness of the clusters. The distribution of membership degrees suggests an interpretation within probability theory. This paper shows that the membership degrees resulting from FCM can be interpreted as posterior probabilities derived from a Bayesian model that assumes the data are generated through a specific probability density function. The results give a clear interpretation of the membership degrees of FCM, as well as of its fuzzification parameter, within a sound theoretical framework, and shed light on possible extensions of the algorithm.
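    For reference, the membership degrees in question are the standard FCM ones. A minimal NumPy computation of the memberships of data points to given cluster centers, with fuzzifier m (the fuzzification parameter), shows the sum-to-one structure that invites the probabilistic reading:

    ```python
    import numpy as np

    def fcm_memberships(X, centers, m=2.0):
        """Standard FCM membership degrees: u_ik is inversely related to
        the distance of point i from center k, raised to 1/(m-1), and
        normalized so that each row sums to 1."""
        # squared distances between every point and every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)  # guard against division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        return inv / inv.sum(axis=1, keepdims=True)
    ```

    As m grows, the memberships flatten toward uniform; as m approaches 1, they approach crisp assignments, which is the behavior the paper reinterprets in Bayesian terms.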

    Possibilistic Bounds for Granular Counting

    Uncertain data are observations that cannot be uniquely mapped to a referent. In the case of uncertainty due to incompleteness, possibility theory is an appropriate model for processing such data. In particular, granular counting is a way to count data in the presence of uncertainty represented by possibility distributions. Two algorithms have been proposed in the literature to compute granular counting: exact granular counting, with quadratic time complexity, and approximate granular counting, with linear time complexity. This paper extends approximate granular counting by computing bounds for the exact granular count. In this way, the efficiency of approximate granular counting is combined with certified bounds whose width can be adjusted in accordance with user needs.
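    The notion of adjustable-width bounds on a granular count can be conveyed with a generic sketch (illustrative only, not the paper's bound construction): given a possibility distribution over counts, an eps-cut yields the interval of counts whose possibility is at least a user-chosen threshold, and raising or lowering the threshold narrows or widens the interval:

    ```python
    def count_bounds(dist, eps):
        """Smallest and largest counts whose possibility is at least eps,
        where dist[n] is the possibility that the count equals n.
        Assumes dist is normalized (max possibility 1.0) and 0 < eps <= 1,
        so the eps-cut is never empty."""
        support = [n for n, p in enumerate(dist) if p >= eps]
        return min(support), max(support)
    ```

    A demanding threshold keeps only the most plausible counts, while a permissive one certifies a wider but safer interval.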

    Granular counting of uncertain data

    We propose a definition of granular count realized in the presence of uncertain data modeled through possibility distributions. We show that the resulting counts are fuzzy intervals in the domain of natural numbers. Based on this result, we devise two algorithms for granular counting: an exact counting algorithm with quadratic-time complexity and an approximate counting algorithm with linear-time complexity. We compare the two algorithms on synthetic data and show their application to a bioinformatics scenario concerning the assessment of gene expression in cells.
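    The fuzzy-interval nature of the count can be illustrated by a brute-force computation in the spirit of the extension principle (our illustration, exponential in the number of observations, which is precisely the cost the paper's quadratic and linear algorithms avoid). Each observation carries the possibility that it does, and that it does not, refer to the referent being counted:

    ```python
    from itertools import combinations

    def brute_force_granular_count(match_poss, nonmatch_poss):
        """Possibility distribution of the number of observations referring
        to a given referent. For each candidate count n, take the best
        (max) over all subsets of size n of the worst-case (min) joint
        possibility that exactly those observations match."""
        m = len(match_poss)
        dist = []
        for n in range(m + 1):
            best = 0.0
            for chosen in combinations(range(m), n):
                s = set(chosen)
                vals = [match_poss[i] if i in s else nonmatch_poss[i]
                        for i in range(m)]
                best = max(best, min(vals) if vals else 1.0)
            dist.append(best)
        return dist
    ```

    The resulting distribution is unimodal over the natural numbers, i.e. a fuzzy interval, consistent with the result stated in the abstract.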